
    Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform

    This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated, reliable resources with opportunistic resources for high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand a large volume of data transmission between processing sites at a consistent rate, adequate control over the network resources is important to ensure a steady flow of processing. In this paper, we propose a system model for a hybrid hosting platform in which stream processing servers installed at distributed sites are interconnected by a combination of dedicated links and the public Internet. Decentralized algorithms have been developed for allocating the two classes of network resources among competing tasks, with the objectives of higher task throughput and better utilization of the expensive dedicated resources. Results from an extensive simulation study show that, with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput, and thus a higher return on investment, than systems using only expensive dedicated resources.
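
    As a rough illustration of the kind of allocation decision the abstract describes, the sketch below implements a simple greedy heuristic in Python: high-rate stream tasks are placed first, and public-Internet (opportunistic) links are tried before dedicated ones so that expensive dedicated capacity is reserved for tasks that cannot fit elsewhere. This is not the paper's decentralized algorithm; the task and link attributes are illustrative assumptions.

        # Hypothetical greedy link-allocation heuristic for a hybrid platform
        # mixing dedicated links with public-Internet (opportunistic) paths.
        from dataclasses import dataclass

        @dataclass
        class StreamTask:
            name: str
            rate_mbps: float        # sustained data rate the stream requires

        @dataclass
        class Link:
            name: str
            capacity_mbps: float    # remaining capacity on this link
            dedicated: bool         # True for a leased link, False for Internet

        def allocate(tasks, links):
            """Place high-rate tasks first; prefer opportunistic links so that
            dedicated capacity stays free for the tasks that need it."""
            assignment = {}
            for task in sorted(tasks, key=lambda t: t.rate_mbps, reverse=True):
                candidates = sorted(
                    (l for l in links if l.capacity_mbps >= task.rate_mbps),
                    key=lambda l: (l.dedicated, -l.capacity_mbps),
                )
                if candidates:
                    link = candidates[0]
                    link.capacity_mbps -= task.rate_mbps
                    assignment[task.name] = link.name
                else:
                    assignment[task.name] = None   # rejected: no feasible link
            return assignment

        tasks = [StreamTask("video-analytics", 400), StreamTask("log-filter", 50)]
        links = [Link("leased-A", 1000, True), Link("internet-B", 300, False)]
        print(allocate(tasks, links))
        # {'video-analytics': 'leased-A', 'log-filter': 'internet-B'}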

    A Taxonomy of Network Computing Systems

    Rapid advances in networking and microprocessor technologies have led to the emergence of Internet-wide distributed computing systems, ranging from simple LAN-based clusters to planetary-scale networks. As these network computing (NC) systems evolve by combining the best features of existing systems, the differences among them are blurring. To address this problem, researchers have proposed formal taxonomies of NC systems. We propose a new taxonomy that is both broad enough to encompass all NC systems and simple enough to be widely used.

    Measuring Scalability of Resource Management Systems

    Scalability refers to the extent of configuration modifications over which a system continues to be economically deployable. Until now, the scalability of resource management systems (RMSs) has been examined only implicitly, by studying different performance measures of RMS designs under different parameters. A framework for quantitatively evaluating scalability, so that the trade-offs among different RMS designs can be examined unambiguously, has yet to be developed. In this paper, we present a methodology for studying the scalability of RMSs based on overhead cost estimation. First, we present a performance model for a managed distributed system (e.g., a Grid computing system) that separates the manager from the managee. Second, based on this performance model, we present a metric for quantifying the scalability of an RMS. Third, simulations are used to apply the proposed metric to selected RMSs from the literature. The results show that the proposed metric is useful in quantifying the scalability of these RMSs.
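
    To make the idea of an overhead-based scalability metric concrete, here is a minimal sketch in Python of a productivity-per-overhead ratio compared across two system configurations. The formula is an illustrative assumption in the spirit of productivity/cost scalability metrics, not the metric defined in the paper.

        # Hypothetical overhead-based scalability ratio (illustrative only,
        # not the paper's metric or performance model).
        def productivity(tasks_completed, quality, overhead_cost):
            """Useful work delivered per unit of management overhead."""
            return tasks_completed * quality / overhead_cost

        def scalability(cfg_small, cfg_large):
            """Productivity at the larger configuration relative to the smaller
            one; values well below 1 suggest RMS overhead grows faster than
            the useful work it manages."""
            return productivity(**cfg_large) / productivity(**cfg_small)

        # Example: doubling the managed system triples the RMS overhead cost.
        small = {"tasks_completed": 1_000, "quality": 0.95, "overhead_cost": 10.0}
        large = {"tasks_completed": 2_000, "quality": 0.93, "overhead_cost": 30.0}
        print(f"scalability = {scalability(small, large):.2f}")   # ~0.65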